Journals
Journal:
IEEE TRANSACTIONS ON LEARNING TECHNOLOGIES
ISSN:
1939-1382
Year:
2014
Vol.:
7
No.:
4
Pages:
304 - 318
The combination of virtual reality interactive systems and educational technologies has been used in the training of procedural tasks, but there is a lack of research on providing specific assistance for acquiring motor skills. In this paper we present a novel approach to evaluating motor skills with an interactive intelligent learning system based on the ULISES framework. We describe the implementation of the different layers that compose ULISES in order to generate a diagnosis of trainees' motor skills. This diagnostic process takes into account the following characteristics of movement: coordination, poses, movement trajectories, and the procedure followed in a sequence of movements. To validate our work, we generated a model for the diagnosis of tennis-related motor skills and conducted an experiment in which we interpreted and diagnosed the tennis serves of several subjects, obtaining promising results.
Journal:
VIRTUAL REALITY
ISSN:
1359-4338
Year:
2014
Vol.:
18
No.:
3
Pages:
161 - 171
This paper focuses on the simulation of bimanual assembly/disassembly operations for training or product design applications. Most assembly applications have been limited to simulating only unimanual tasks, or bimanual tasks performed with one hand. However, recent research has introduced the use of two haptic devices for bimanual assembly. We propose a more natural and lower-cost bimanual interaction than existing ones, based on markerless motion capture (Mocap) systems. Specifically, this paper presents two interactions based on markerless Mocap technology and one interaction that combines markerless Mocap technology with haptic technology. A set of experiments following a within-subjects design was conducted to test the usability of the proposed interfaces. The markerless Mocap-based interactions were validated against a two-haptic-device interaction, as the latter has been successfully integrated into bimanual assembly simulators. The pure markerless Mocap interaction proved to be either the most or the least efficient depending on the configuration (with 2D or 3D tracking, respectively). Usability results among the proposed interactions and the two-haptic-device interaction showed no significant differences. These results suggest that markerless Mocap or hybrid interactions are valid solutions for simulating bimanual assembly tasks when the precision of the motion is not critical. The decision on which technology to use should depend on the trade-off between the precision required to simulate the task, the cost, and the intrinsic features of the technology.
Authors:
Velaz, Yaiza; Rodriguez Arce, Jorge; Gutierrez, T.; et al.
Journal:
JOURNAL OF COMPUTING AND INFORMATION SCIENCE IN ENGINEERING
ISSN:
1530-9827
This paper focuses on the use of virtual reality (VR) systems for teaching industrial assembly tasks and studies the influence of the interaction technology on the learning process. The experiment conducted follows a between-subjects design with 60 participants distributed in five groups. Four groups were trained on the target assembly task with a VR system, but each group used a different interaction technology: mouse-based, Phantom Omni (R) haptic, and two configurations of the Markerless Motion Capture (Mmocap) system (with 2D or 3D tracking of the hands). The fifth group was trained with a video tutorial. A post-training test carried out the following day evaluated performance on the real task. The experiment studies the efficiency and effectiveness of each interaction technology for learning the task, taking into consideration both quantitative measures (such as training time, real-task performance, and the evolution from the virtual task to the real one) and qualitative data (user feedback from a questionnaire). Results show that there were no significant differences in final performance among the five groups. However, users trained with the mouse and the 2D-tracking Mmocap system took significantly less training time than those in the other virtual modalities. This yields two main outcomes: (1) the perception of collisions using haptics does not increase the learning transfer of procedural tasks demanding low motor skills, and (2) Mmocap-based interactions can be valid for training this kind of task.